AI and Trust: The Limits of Authorship in the Digital Age

From journalism to science, there is a growing demand for verifiable authorship

The article discusses the growing role of AI in content creation and the challenge of preserving human authorship. Experts note that trust in information remains associated with the human element, emphasizing the importance of transparency and ethical responsibility. In a digital space saturated with synthetic content, human content is becoming a key differentiator of quality and credibility.

A recent incident involving errors in AI-generated content sparked immediate criticism and reignited the debate on the limits, risks, and responsibilities of using AI in the production of informative content. For Ferran Lalueza, a professor at the UOC's Information and Communication Sciences Studies, errors of this kind are not anecdotal but representative of a larger phenomenon: the growing difficulty of distinguishing between human and synthetic content, a boundary that will continue to blur as the technology advances.

Widespread use, fragmented trust

The adoption of artificial intelligence is widespread, but it is not accompanied by an equivalent level of trust. An international report covering 47 countries, produced by Melbourne Business School in collaboration with KPMG, shows that 66% of the population uses AI habitually, but only 46% express full trust in its use. The gap is explained by the fact that, although AI is perceived as effective for processing data, significant doubts persist about security, social impact, ethical use, and long-term consequences.

The Pew Research Center reinforces this diagnosis: 50% of people say they are more concerned than enthusiastic about AI, and 53% fear a progressive loss of human creative abilities.

Technical capabilities versus structural limitations

From an academic perspective, Lalueza emphasizes that AI lacks essential elements for building social trust: authenticity, clear traceability, ethical responsibility, verifiable commitment, and real empathy. These shortcomings are not technical but structural, and they affect how content is perceived, particularly in sensitive areas such as news, opinion, and specialized knowledge.

In parallel, Alexandre López Borrull, also a professor at the UOC's Information and Communication Sciences Studies, highlights that human authorship allows for the incorporation of layers of context, values, narrative intent, and adaptation to the audience, elements that are difficult to reproduce through automatic generation.

Credibility remains linked to the human element

Various studies show that identifying content as AI-generated does not reduce the preference for human-written texts, especially when it comes to complex, sensitive, or interpretative information. AI is accepted for utilitarian tasks, summaries, and quick queries, but credibility continues to be associated with human authorship.

Transparency, perception, and the limits of recognition

Although the demand for transparency is high, people's actual ability to identify synthetic content remains limited. According to the Pew Research Center, 76% of adults consider it very important to be able to tell whether a piece of content was created by a person or by an AI.

Science sets clear limits on artificial authorship

In the academic sphere, the response has been more unequivocal, with two requirements now widely adopted. The first is the demand for transparency, which obliges authors to declare in which phases of the work AI was used (translation, correction, auxiliary analysis). The second is the protection of confidentiality, which prohibits the use of generative AI on unpublished manuscripts and in peer review processes. Furthermore, there is a strong consensus that an AI cannot be listed as a scientific author, since it cannot assume ethical or legal responsibility for errors, false results, or impacts derived from the research.

Human content as a differentiated value

Specialists agree that the ecosystem is moving towards two major simultaneous dynamics. On one hand, the full acceptance of AI for producing fast and economical content. On the other, the valuation of content created without AI support as a distinctive mark of quality, especially in areas where trust is central.

In this context, labels like “made by a human”, “AI-assisted”, or “created by AI” could change their meaning over time. In the video game industry, the No Gen AI seal is beginning to position itself as a benchmark. These labels seek to differentiate human content in an ecosystem dominated by automated production, and they respond to an emerging perception: the human element is becoming associated with quality, ethics, and emotional connection, in contrast to mechanical efficiency.

Lalueza states that the weight of this distinction will depend on user preferences, while López Borrull proposes intermediate models, such as a “supervised by an expert” label, in which AI acts as a tool rather than a substitute. The debate, they point out, should shift to the creation process, evaluating whether the technology added value or eroded originality, ethics, and responsibility.

Search engines, algorithms, and new value criteria

Amidst the saturation of synthetic content, search engines and digital platforms have begun adjusting their algorithms to prioritize criteria such as E-E-A-T (experience, expertise, authoritativeness, and trustworthiness), which favor content created or signed by individuals with a verifiable track record. This adjustment responds not only to technical criteria but also to an explicit social demand.

For now, the human element is regaining symbolic and social value in an environment saturated with automation.